# Q&A System Optimization
## Reranker Gte Multilingual Base Msmarco Bce Ep 2
A cross-encoder model trained on the MS MARCO dataset with the sentence-transformers library, designed for text re-ranking and semantic search.

**Tags:** Text Embedding, Multilingual | **Author:** skfrost19 | **Downloads:** 28 | **Likes:** 0
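Cross-encoder rerankers such as the models in this listing score a query and a candidate passage jointly, so they are typically loaded through the sentence-transformers `CrossEncoder` class. Below is a minimal sketch; the model ID is a guess assembled from the listing above and should be verified against the actual repository name.

```python
from sentence_transformers import CrossEncoder

# Hypothetical model ID inferred from the listing; verify the exact repo name.
model = CrossEncoder("skfrost19/reranker-gte-multilingual-base-msmarco-bce-ep-2")

query = "how do I renew my passport?"
passages = [
    "Passport renewal applications can be submitted online or by mail.",
    "The weather in Lisbon is mild in spring.",
]

# predict() scores each (query, passage) pair; higher means more relevant.
scores = model.predict([(query, p) for p in passages])
for passage, score in sorted(zip(passages, scores), key=lambda x: x[1], reverse=True):
    print(f"{score:.4f}  {passage}")
```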
## Ruri V3 Reranker 310m Preview
A preview release of a Japanese general-purpose reranking model trained from the cl-nagoya/ruri-v3-pt-310m base model, designed for Japanese text relevance ranking.

**License:** Apache-2.0 | **Tags:** Text Embedding, Japanese | **Author:** cl-nagoya | **Downloads:** 79 | **Likes:** 0
## Reranker Msmarco ModernBERT Base Lambdaloss
A cross-encoder model fine-tuned from ModernBERT-base that scores text pairs, suitable for text re-ranking and semantic search.

**License:** Apache-2.0 | **Tags:** Text Embedding, English | **Author:** tomaarsen | **Downloads:** 89 | **Likes:** 4
## Reranker Msmarco MiniLM L12 H384 Uncased Lambdaloss
A cross-encoder model fine-tuned from MiniLM-L12-H384-uncased for text re-ranking and semantic search.

**License:** Apache-2.0 | **Tags:** Text Embedding, English | **Author:** tomaarsen | **Downloads:** 1,019 | **Likes:** 3
## Ruri Reranker Stage1 Large
Ruri-Reranker is a Japanese text re-ranking model built on sentence-transformers, designed to optimize the relevance ranking between queries and documents.

**License:** Apache-2.0 | **Tags:** Text Embedding, Japanese | **Author:** cl-nagoya | **Downloads:** 23 | **Likes:** 1
## Roberta Finetuned City
A model fine-tuned from deepset/roberta-base-squad2; its specific purpose is not stated.

**Tags:** Large Language Model, Transformers | **Author:** svo2 | **Downloads:** 19 | **Likes:** 0
## Bert Large Uncased Finetuned Squadv1
A question-answering model fine-tuned from BERT-large on the SQuAD v1 dataset and optimized with second-order pruning.

**Tags:** Question Answering, Transformers, English | **Author:** RedHatAI | **Downloads:** 35 | **Likes:** 1
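Extractive question-answering models like this one and those that follow share the same usage pattern: given a question and a context passage, they return an answer span. Here is a minimal sketch with the Hugging Face `transformers` pipeline, using the public distilbert-base-cased-distilled-squad checkpoint as a stand-in for the fine-tuned variants in this listing.

```python
from transformers import pipeline

# Public base checkpoint used as a stand-in; swap in the exact fine-tuned
# model ID from this listing once you have confirmed it.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a DistilBERT variant fine-tuned on the SQuAD dataset "
            "for extractive question answering.",
)

# The pipeline returns the extracted answer span plus a confidence score.
print(result["answer"], round(result["score"], 3))
```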
## Distilbert Base Cased Distilled Squad Finetuned Squad
A fine-tuned version of distilbert-base-cased-distilled-squad, suitable for question-answering tasks.

**License:** Apache-2.0 | **Tags:** Question Answering, Transformers | **Author:** ms12345 | **Downloads:** 14 | **Likes:** 0
## Deberta Base Finetuned Squad1 Aqa Newsqa
A question-answering model fine-tuned from DeBERTa-base on the SQuAD1, AQA, and NewsQA datasets.

**License:** MIT | **Tags:** Question Answering, Transformers | **Author:** stevemobs | **Downloads:** 15 | **Likes:** 0
## Distilbert Base Uncased Combined Squad Adversarial
A fine-tuned version of distilbert-base-uncased on the adversarial SQuAD dataset, suitable for question-answering tasks.

**License:** Apache-2.0 | **Tags:** Question Answering, Transformers | **Author:** stevemobs | **Downloads:** 15 | **Likes:** 0
## Bert Large Uncased Squadv1.1 Sparse 80 1x4 Block Pruneofa
A BERT-Large model pruned to 80% sparsity with 1x4 blocks using Prune OFA, then fine-tuned with knowledge distillation; it performs strongly on the SQuAD v1.1 question-answering task.

**License:** Apache-2.0 | **Tags:** Question Answering, Transformers, English | **Author:** Intel | **Downloads:** 15 | **Likes:** 1
## Dkrr Dpr Nq Retriever
A retriever for question answering trained with the FiD knowledge-distillation approach, which distills knowledge from the reader model into the retriever to improve the efficiency of Q&A systems.

**Tags:** Question Answering, Transformers | **Author:** castorini | **Downloads:** 38 | **Likes:** 0
## Distilbert Base Squad2 Custom Dataset
A model fine-tuned from DistilBERT-base on SQuAD 2.0 and a custom Q&A dataset, focused on efficient question answering.

**Tags:** Question Answering, Transformers | **Author:** superspray | **Downloads:** 17 | **Likes:** 0
## Bert Base Uncased Squadv1 X1.84 F88.7 D36 Hybrid Filled V1
A question-answering model pruned with the nn_pruning library to retain 50% of the original weights, fine-tuned on SQuAD v1 and reaching an F1 score of 88.72.

**License:** MIT | **Tags:** Question Answering, Transformers, English | **Author:** madlag | **Downloads:** 30 | **Likes:** 0
## Distilbert Base Uncased Squad2 With Ner Mit Restaurant With Neg With Repeat
A DistilBERT model fine-tuned on the SQuAD 2.0 and MIT Restaurant datasets, supporting both question answering and named entity recognition.

**Tags:** Sequence Labeling, Transformers, English | **Author:** andi611 | **Downloads:** 19 | **Likes:** 0
## Bert Large Uncased Whole Word Masking Squad2 With Ner Mit Restaurant With Neg With Repeat
A fine-tuned version of bert-large-uncased-whole-word-masking-squad2 on the squad_v2 and mit_restaurant datasets, supporting token classification tasks.

**Tags:** Sequence Labeling, Transformers, English | **Author:** andi611 | **Downloads:** 18 | **Likes:** 0
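Unlike the extractive QA entries, the two sequence-labeling models above expose a token-classification head, so they are loaded with the token-classification pipeline rather than the question-answering pipeline. A minimal sketch, assuming a model ID slugified from the listing (verify the exact repository name before use):

```python
from transformers import pipeline

# Hypothetical model ID slugified from the listing; confirm it before use.
ner = pipeline(
    "token-classification",
    model="andi611/distilbert-base-uncased-squad2-with-ner-mit-restaurant-with-neg-with-repeat",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

for entity in ner("Book a table for two at a cheap Italian restaurant downtown"):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```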
## Albert Xxlarge V2 Squad2
A question-answering model based on the ALBERT XXLarge architecture, fine-tuned on the SQuAD v2 dataset.

**Tags:** Question Answering, Transformers | **Author:** mfeb | **Downloads:** 150 | **Likes:** 2
## Bert Mini Finetuned Squadv2
A BERT-mini model fine-tuned on the SQuAD 2.0 dataset with the M-FAC second-order optimizer for question answering.

**Tags:** Question Answering, Transformers | **Author:** M-FAC | **Downloads:** 17 | **Likes:** 0